29 research outputs found

    The EHA Research Roadmap: Normal Hematopoiesis.

    In 2016, the European Hematology Association (EHA) published the EHA Roadmap for European Hematology Research,1 aiming to highlight achievements in the diagnosis and treatment of blood disorders and to better inform European policy makers and other stakeholders about the urgent clinical and scientific needs and priorities in the field of hematology. Each section was coordinated by one or two section editors, leading international experts in the field. In the five years that have followed, advances in the field of hematology have been plentiful. EHA is therefore pleased to present an updated Research Roadmap, now comprising 11 sections, each of which will be published separately. The updated EHA Research Roadmap identifies the most urgent priorities in hematology research and clinical science, supporting a more informed, more focused, and ideally better-funded future for European hematology research. The 11 EHA Research Roadmap sections are: Normal Hematopoiesis; Malignant Lymphoid Diseases; Malignant Myeloid Diseases; Anemias and Related Diseases; Platelet Disorders; Blood Coagulation and Hemostatic Disorders; Transfusion Medicine; Infections in Hematology; Hematopoietic Stem Cell Transplantation; CAR-T and Other Cell-based Immune Therapies; and Gene Therapy.

    Large expert-curated database for benchmarking document similarity detection in biomedical literature search

    Document recommendation systems for locating relevant literature have mostly relied on methods developed a decade ago. This is largely due to the lack of a large offline gold-standard benchmark of relevant documents covering a variety of research fields, against which newly developed literature search techniques can be compared, improved and translated into practice. To overcome this bottleneck, we have established the RElevant LIterature SearcH consortium, consisting of more than 1500 scientists from 84 countries, who have collectively annotated the relevance of over 180 000 PubMed-listed articles with regard to their respective seed (input) article(s). The majority of annotations were contributed by highly experienced, original authors of the seed articles. The collected data cover 76% of all unique PubMed Medical Subject Headings descriptors. No systematic biases were observed across different experience levels, research fields or time spent on annotations. More importantly, annotations of the same document pairs contributed by different scientists were highly concordant. We further show that the three representative baseline methods used to generate recommended articles for evaluation (Okapi Best Matching 25, Term Frequency-Inverse Document Frequency and PubMed Related Articles) had similar overall performance. Additionally, we found that these methods each tend to produce distinct collections of recommended articles, suggesting that a hybrid method may be required to completely capture all relevant articles. The established database server located at https://relishdb.ict.griffith.edu.au is freely available for the downloading of annotation data and the blind testing of new methods. We expect that this benchmark will be useful for stimulating the development of new, powerful techniques for title- and title/abstract-based search engines for relevant articles in biomedical research.
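The Term Frequency-Inverse Document Frequency baseline mentioned above can be sketched in a few lines. This is a minimal, self-contained illustration of TF-IDF weighting with cosine ranking; the toy corpus, tokenization by whitespace, and the plain log-IDF weighting are assumptions for illustration, not the consortium's actual implementation.

```python
# Minimal TF-IDF similarity sketch (illustrative, not the RELISH code).
import math
from collections import Counter

def tfidf_vectors(docs):
    """Build sparse TF-IDF vectors (dicts) for tokenized documents."""
    n = len(docs)
    df = Counter(term for doc in docs for term in set(doc))
    idf = {t: math.log(n / df[t]) for t in df}
    vecs = []
    for doc in docs:
        tf = Counter(doc)
        vecs.append({t: tf[t] * idf[t] for t in tf})
    return vecs

def cosine(u, v):
    """Cosine similarity between two sparse vectors."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

# Toy corpus: a seed abstract (index 0) and two candidate articles.
corpus = [
    "hematopoietic stem cell transplantation outcomes".split(),
    "stem cell transplantation in leukemia patients".split(),
    "soil erosion in coastal regions".split(),
]
vecs = tfidf_vectors(corpus)
# Rank candidates 1 and 2 by similarity to the seed.
ranked = sorted(range(1, 3), key=lambda i: cosine(vecs[0], vecs[i]),
                reverse=True)
```

A benchmark like RELISH then scores such a ranking against the expert relevance annotations for each seed article.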

    Use of persistent identifiers in the publication and citation of scientific data

    In the last decade, the primary data that research is based on has become a third pillar of scientific work, alongside theoretical reasoning and experiment. Greatly increased computing power and storage, together with web services and other electronic resources, have enabled a quantum leap in research based on the analysis of large amounts of data. Traditional scientific communication, however, is only slowly adopting media beyond an emulation of paper. This leaves many data inaccessible and, in the long run, exposes valuable data to the risk of loss. Most important to the availability of data is a valid citation, meaning that all fields mandatory for a bibliographic citation are included. In addition, a mechanism is needed that ensures that the location of the referenced data on the Internet can be resolved in the long term. Simply using URLs ("data management on web servers") does not help, because URLs are short-lived, mostly becoming invalid after just a few months. Data publication on the Internet therefore needs a system of reliable pointers to each digital object as an integral part of the citation. To achieve this persistence of identifiers for their conventional publications, many scientific publishers use Digital Object Identifiers (DOIs). The identifier is resolved through the handle system to the valid location (URL) where the dataset can be found. This approach meets one of the prerequisites for the citability of scientific data published online. In addition, the valid bibliographic citation can be included in library catalogues. To improve access to data and to create incentives for scientists to make their data accessible, several German data centers initiated a project on the publication and citation of scientific data. The project "Publication and Citation of Scientific Data" (STD-DOI) was funded by the German Research Foundation (DFG) between 2003 and 2008.
In STD-DOI, the German National Library of Science and Technology (TIB Hannover), together with the German Research Centre for Geosciences (GFZ Potsdam), the Alfred Wegener Institute for Polar and Marine Research (AWI) in Bremerhaven, the University of Bremen, the Max Planck Institute for Meteorology in Hamburg, and the DLR German Remote Sensing Data Center, set up the first system to assign DOIs to data sets and, finally, to their publications. The STD-DOI system for data publication is now used by eight data publication agents. Publishing data through specific agents addresses specific user communities and caters for their requirements in the data publication process. The registration process between TIB and the publication agents is based on a SOAP web service. This presentation will show the organisational and technical aspects of the data publication process in the STD-DOI project and give examples of a successful workflow towards established data citations in the earth sciences.
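The resolution step described above, a DOI resolved through the handle system to the dataset's current URL, can be sketched against the public doi.org handle API, which returns handle records as JSON. The DOI and record contents below are hypothetical examples, not a real STD-DOI registration.

```python
# Sketch of DOI resolution through the handle system. doi.org exposes
# handle records as JSON; the record's URL value is the object's
# current location. The sample record below is hypothetical.
import json
import urllib.request

def fetch_handle_record(doi):
    """Fetch the handle record for a DOI from the public doi.org API."""
    url = f"https://doi.org/api/handles/{doi}"
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

def extract_url(record):
    """Pull the current location (URL) out of a handle record."""
    for value in record.get("values", []):
        if value.get("type") == "URL":
            return value["data"]["value"]
    return None

# Hypothetical handle record, shaped like a doi.org API response.
sample_record = {
    "responseCode": 1,
    "handle": "10.1594/EXAMPLE",
    "values": [
        {"index": 1, "type": "URL",
         "data": {"format": "string",
                  "value": "https://example.org/dataset"}},
    ],
}
```

Because the citation carries the DOI rather than the URL, the pointer stays valid even when the data center later moves the dataset: only the handle record is updated.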

    Every Bit Counts: Publication and Citation of Data in the Earth Sciences

    Intensive research in the earth sciences over the past decades has created a tremendous wealth of literature, data, and material collections. So far, literature, data and sample collections have been separated. Information technology and the internet, in particular the new cyberinfrastructures for the earth sciences, offer ways to interlink literature, data and samples, creating the potential for new interpretations of the data and materials beyond the interpretations already published in the literature. To achieve this, technical, editorial and custodial issues need to be resolved. A key to this is the use of persistent identifiers for literature, data and sample collection objects. Past experience has shown that URLs are transient, but systems of persistent identifiers (e.g. handle.net, DOI, URN) already exist and can be used to reference these objects. The project "Publication and citation of scientific primary data" (STD-DOI) shows prototypically how these criteria can be met and implements a system for the publication of scientific data, which is open to the scientific community in any scientific field. This project uses persistent identifiers (DOI, handle.net and URN) to identify datasets available in a digital format. In addition, the data publications may be included in the catalogue of the German National Library of Science and Technology (TIB). Data at finer granularity are only identified by generic handle.net IDs, not by DOIs. Ideally, literature should already reference the materials used and the data derived from these. Since this is not yet done, repositories publishing data and tracking sample material record the literature based on these data and samples in their databases. In the case of the STD-DOI project, its metadata profile includes identifiers of related material, e.g. literature interpreting the data, related datasets, or samples from which the data were derived. 
These metadata can then be used to create ontologies interlinking literature, data and samples. The challenging task ahead is to interlink literature, data and samples with as little editorial work as possible. Keeping the amount of work small is essential to allow the indexing of the back catalogue of already existing works. A key technology for this task is the automatic creation of ontologies by text-mining applications; these ontologies can be combined with ontologies generated from reference lists and from metadata. This presentation will look at existing systems for data publication (STD-DOI project, http://www.std-doi.de), sample identification (SESAR project, http://www.geosamples.org) and the management of interconnected literature, data publications and sample collections (TaxonConcept, http://taxonconcept.stratigraphy.net), and at how these systems can be used to enable new discoveries in the earth sciences.
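The interlinking described above, metadata records whose related-identifier fields tie a dataset to the literature interpreting it and the samples it was derived from, can be sketched as a simple link graph. The record structure and identifiers below are hypothetical illustrations, not the STD-DOI metadata schema.

```python
# Illustrative sketch: building a link graph between literature, data
# and samples from "related identifier" metadata fields. Record shape
# and identifiers are hypothetical, not the actual STD-DOI profile.
from collections import defaultdict

# Each record lists persistent identifiers of related objects.
records = [
    {"id": "doi:10.1594/EXAMPLE.DATA1",          # a dataset
     "related": ["doi:10.1000/EXAMPLE.PAPER1",   # interpreting paper
                 "igsn:EXAMPLE-SAMPLE1"]},       # source sample
    {"id": "doi:10.1000/EXAMPLE.PAPER1", "related": []},
    {"id": "igsn:EXAMPLE-SAMPLE1", "related": []},
]

def build_link_graph(records):
    """Return an undirected adjacency map over identifiers."""
    graph = defaultdict(set)
    for rec in records:
        for target in rec["related"]:
            graph[rec["id"]].add(target)
            graph[target].add(rec["id"])
    return graph

graph = build_link_graph(records)
```

Traversing such a graph is what lets a reader move from a sample to every dataset measured on it and on to every paper interpreting those data, without per-link editorial work.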